A Fast Attribute Based Encryption
Our new Access Control Encryption (ACE) scheme is an implementation of CP-ABE used as part of a key-delivery mechanism for an encrypted database, with a focus on improving performance. In ACE, access policies may be arbitrary predicates over the set of object attributes. Efficiency gains are most pronounced when the DNF representations of the policies are compact. Within the lifespan of the keys, each user performs very few ABE decryptions, regardless of the number of policies accessible to her; keys to most objects are then computed using only symmetric-key decryptions.
ACE is not the first scheme to use symmetric-key cryptography to reduce the number of CP-ABE operations when access policies form a multi-level partially ordered set. In addition to this significant saving, however, ACE also exploits overlaps among the clauses of different policies, further reducing computational complexity.
Let R denote the number of user roles, N the number of object access policies, and k the ratio between the cost of a CP-ABE encryption and that of a symmetric-key encryption (for 10 attributes, k is about a million), and write N = cR. The gain factor of ACE over a competing hybrid system is kc/(k+c). Usually c >> 1, but in some systems c < 1 may occur.
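The gain factor can be evaluated numerically; the following sketch uses the abstract's rough figure of k ≈ 10^6 for 10 attributes, together with a hypothetical c = N/R = 100 (our own illustrative choice, not a figure from the paper):

```python
def ace_gain(k, c):
    """Gain factor of ACE over a competing hybrid system: kc / (k + c)."""
    return k * c / (k + c)

# k ~ 1e6 (CP-ABE vs. symmetric-key cost ratio for 10 attributes, per the abstract);
# c = 100 is a hypothetical policies-per-role ratio.
g = ace_gain(1e6, 100)
print(g)  # just under 100: for k >> c the gain approaches c
```

Note that kc/(k+c) is the harmonic-mean form: when k >> c the gain is capped near c, and when c >> k it is capped near k, matching the abstract's remark that the regime depends on whether c >> 1.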
ACE is composed of two subsystems encrypting the same messages: a CP-ABE system and a symmetric-key encryption system. We prove that ACE is secure under a new Uniform Security Game that we propose and justify, assuming that its building blocks, namely CP-ABE and block ciphers, are secure. We require that the CP-ABE be secure under the Selective Set Model, and that the block cipher be secure under Multi-User CPA, a notion we define.
We present Policy Encryption (PE), which can replace CP-ABE as a component of ACE. In many cases, PE is more efficient than CP-ABE. However, PE does not prevent collusions; instead, it limits them. PE is useful when owners can compartmentalize objects and subjects so that, within each compartment, the owners can tolerate collusions; PE prevents inter-compartmental collusions. PE also has the following appealing properties: it relies on an older, hence more thoroughly studied, intractability assumption, the Computational Diffie-Hellman assumption, whereas CP-ABE relies on the newer Bilinear Diffie-Hellman assumption. PE uses off-the-shelf standard cryptographic building blocks, with one small modification, and with proven security. For a small number of compartments, PE is much faster than CP-ABE. PE and CP-ABE can coexist in the same system, with CP-ABE used in high-security compartments.
We apply ACE to a practical financial example, the Consolidated Audit Trail (CAT), which is expected to become the largest repository of financial data in the world.
On the economic payoff of forensic systems when used to trace counterfeited software and content
We analyze how well forensic systems reduce counterfeiting of software and content. We make a few idealized assumptions and show that if the revenues of the producer before the introduction of forensics (Ro) are non-zero, then the payoff of forensics is independent of the overall market size and declines as the ratio between the penalty and the crime (both monetized) goes up; this behavior is reversed if Ro = 0. We also show that the payoff grows with the ratio between the success probability with and without forensics; however, for typical parameters, most of the payoff is already reached when this ratio is 5.
Towards a Theory of Trust Based Collaborative Search
Trust Based Collaborative Search is an interactive metasearch engine that presents the user with clusters of results, based not only on the similarity of content but also on the similarity of the recommending agents. The theory presented here is broad enough to cover search, browsing, recommendations, demographic profiling, and consumer targeting; we use search as our running example. We developed a novel general trust theory. In this context, as a special case, we equate trust between agents with the similarity between their search behaviors. The theory suggests that clusters should be close to maximal similarity, within a tolerance dictated by the amount of uncertainty about the vectors of probabilities of attributes representing queries, pages, and agents. In addition, we give a new theoretical analysis of clustering tolerances, enabling more judicious choices of optimal tolerances. Specifically, we show that tolerances should be divided by at least a constant greater than 1 as we descend from one layer of the hierarchical clustering to the next. We also show a promising connection between collaborative search and cryptography: a query plays the role of a cryptogram, the search engine is the cryptanalyst, and the user's intention is the cleartext. Shannon's unicity distance corresponds to the length of the search and is needed to quantify the clustering tolerance.
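The unicity-distance analogy can be made concrete with Shannon's classical formula U = H(K)/D, where H(K) is the key entropy and D the per-symbol redundancy of the plaintext. The sketch below applies it to standard textbook figures (a 128-bit key and English-text redundancy), not to any parameters from the paper:

```python
import math

def unicity_distance(key_bits, redundancy_bits_per_symbol):
    """Shannon's unicity distance U = H(K) / D: the expected number of
    ciphertext symbols needed before the key is uniquely determined."""
    return key_bits / redundancy_bits_per_symbol

# Common estimate for English: absolute rate log2(26) minus a true rate
# of about 1.5 bits/char gives redundancy D ~ 3.2 bits/char.
d_english = math.log2(26) - 1.5
u = unicity_distance(128, d_english)
print(round(u, 1))  # about 40 characters of ciphertext
```

In the search analogy, the same quantity would bound how long a query interaction must be before the user's intention (the "cleartext") is pinned down.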
User impersonation in key certification schemes
In this note we exhibit weaknesses in two key certification schemes. We show how a legitimate user can impersonate any other user in an ElGamal-based certification scheme, even if hashing is applied first. Furthermore, we show how anybody can impersonate users of the modular square root key certification scheme if no hashing occurs before certification. This shows that it is essential for this certification scheme to hash a message before signing it.
Quantifying Trust
Trust is a central concept in public-key cryptography infrastructure and in security in general. We study its initial quantification and its spread patterns. There is empirical evidence that in trust-based reputation models for virtual communities, it pays to restrict the clusters of agents to small sets with high mutual trust. We propose and motivate a mathematical model in which this phenomenon emerges naturally. In our model, we separate trust values from their weights; we motivate this separation using real examples, and show that in this model trust converges to the extremes, agreeing with and accentuating the observed phenomenon. Specifically, in our model, cliques of agents of maximal mutual trust are formed, and the trust between any two agents who do not maximally trust each other converges to zero. We offer initial practical relaxations of the model that preserve some of its theoretical flavor.
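The convergence-to-extremes behavior can be illustrated with a toy reinforcement dynamic. The update rule below is our own hypothetical choice for illustration, not the model from the paper: trust above 1/2 is reinforced toward 1, trust below 1/2 decays toward 0, and 1/2 is an unstable fixed point.

```python
def update(t):
    """Hypothetical reinforcement rule (illustrative only, not the
    paper's model): normalize t^2 against (1-t)^2, which pushes
    trust values away from the unstable midpoint 1/2."""
    return t * t / (t * t + (1 - t) * (1 - t))

def converge(t, steps=50):
    """Iterate the update rule; trust ends up near 0 or near 1."""
    for _ in range(steps):
        t = update(t)
    return t

print(converge(0.6))  # tends toward 1 (joins a maximal-trust clique)
print(converge(0.4))  # tends toward 0
```

Any dynamic with this bistable shape reproduces the qualitative claim: pairs above the threshold form maximal-trust cliques, while all other pairwise trust collapses to zero.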
An Observation about Variations of the Diffie-Hellman Assumption
We generalize the Strong Boneh-Boyen (SBB) signature scheme to sign vectors; we call this scheme GSBB. We show that if a particular (but most natural) average-case reduction from SBB to GSBB exists, then the Strong Diffie-Hellman (SDH) and the Computational Diffie-Hellman (CDH) problems have the same worst-case complexity.
A Note on the Bilinear Diffie-Hellman Assumption
The Bilinear Diffie-Hellman (BDH) intractability assumption is required to establish the security of new Weil-pairing-based cryptosystems. BDH is reducible to most of the older believed-to-be-hard discrete-log and Diffie-Hellman problems, but there is no known reduction from any of those problems to BDH. Let the bilinear mapping be e: G1 x G1 -> G2, where G1 and G2 are cyclic groups. We show that a many-one reduction from any of the relevant problems to BDH has to include an efficient mapping \phi: G2 -> G1 with \phi(g^x) = f(x)P, where g and P are generators of the corresponding cyclic groups; \phi must be used in the reduction either before or after the call to the BDH oracle. We show that if f(x) = ax^n + b for any constants a, b, n, then \phi could be used as an oracle for a probabilistic polynomial-time solution of Decision Diffie-Hellman in G2. Thus such a reduction is unlikely.
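For readability, the structural requirement on the hypothetical reduction can be restated in display form (this is the same content as the abstract, with no new claims):

```latex
\[
e : G_1 \times G_1 \to G_2,
\qquad
\phi : G_2 \to G_1,
\qquad
\phi\!\left(g^{x}\right) = f(x)\,P,
\]
where $g$ generates $G_2$ and $P$ generates $G_1$. The result states that if
\[
f(x) = a x^{n} + b \qquad \text{for constants } a,\, b,\, n,
\]
then oracle access to $\phi$ yields a probabilistic polynomial-time
distinguisher for Decision Diffie--Hellman in $G_2$, so a many-one
reduction of this shape is unlikely.
```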